Adaptively Secure Coin-Flipping, Revisited
The full-information model was introduced by Ben-Or and Linial in 1985 to
study collective coin-flipping: the problem of generating a common bounded-bias
bit in a network of n players, some of whom are faulty. They showed that the
majority protocol can tolerate t = O(sqrt(n)) adaptive corruptions, and
conjectured that this is optimal in the adaptive setting. Lichtenstein, Linial,
and Saks proved that the conjecture holds for protocols in which each player
sends a single bit. Their result has been the main progress on the conjecture
in the last 30 years.
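As a toy illustration of the majority protocol and of why sqrt(n) is the natural threshold, the following simulation (a sketch written for this summary, not code from the paper; the parameters n, t and the all-ones corruption strategy are illustrative assumptions) fixes the bits of t corrupted players to 1 and estimates the resulting bias of the majority vote.

```python
# Illustrative sketch (not from the paper): collective coin-flipping by majority.
# Each of n players broadcasts a uniformly random bit; the output is the majority.
# An adversary that forces the bits of t players to 1 shifts the sum by at most t,
# so its influence only becomes significant once t is on the order of sqrt(n),
# the typical margin of an honest majority vote.
import random

def majority_coin_flip(n, t, rng):
    """Majority protocol with t corrupted players whose bits are forced to 1."""
    honest = sum(rng.randint(0, 1) for _ in range(n - t))
    total = honest + t                 # corrupted players all contribute 1
    return 1 if 2 * total > n else 0

def estimate_bias(n, t, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(majority_coin_flip(n, t, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    n = 10001
    for t in (0, 10, int(n ** 0.5), 5 * int(n ** 0.5)):
        print(f"t = {t:5d}: Pr[output = 1] ~= {estimate_bias(n, t):.3f}")
```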
In this work we revisit this question and ask: what about protocols involving
longer messages? Can increased communication allow for a larger fraction of
faulty players?
We introduce a model of strong adaptive corruptions, where in each round, the
adversary sees all messages sent by honest parties and, based on the message
content, decides whether to corrupt a party (and intercept its message) or not.
We prove that any one-round coin-flipping protocol, regardless of message
length, is secure against at most O(sqrt(n) * polylog(n)) strong adaptive
corruptions. Thus, increased message length does not help in this setting.
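To make the extra power of strong adaptive corruptions concrete, here is a small sketch (again illustrative only, using the one-round majority protocol and a demo budget t, not the paper's analysis): the adversary inspects all honest bits first and only then overrides up to t of the bits that disagree with its target, so roughly sqrt(n) corruptions already fix the outcome with constant probability.

```python
# Illustrative sketch of a strongly adaptive adversary attacking the one-round
# majority protocol: it sees every honest bit before deciding whom to corrupt,
# then intercepts and replaces up to t messages that oppose its target value.
import random

def strongly_adaptive_attack(n, t, rng, target=1):
    bits = [rng.randint(0, 1) for _ in range(n)]   # honest one-round messages
    budget = t
    for i in range(n):                 # corrupt only parties whose bit hurts the target
        if budget == 0:
            break
        if bits[i] != target:
            bits[i] = target           # intercept and replace the message
            budget -= 1
    return 1 if 2 * sum(bits) > n else 0

def success_rate(n, t, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(strongly_adaptive_attack(n, t, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    n = 10001
    root = int(n ** 0.5)
    for t in (0, root // 2, root, 2 * root):
        print(f"t = {t:5d}: Pr[adversary forces 1] ~= {success_rate(n, t):.3f}")
```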
We then shed light on the connection between adaptive and strongly adaptive
adversaries, by proving that for any symmetric one-round coin-flipping protocol
secure against t adaptive corruptions, there is a symmetric one-round
coin-flipping protocol secure against t strongly adaptive corruptions.
Returning to the standard adaptive model, we can now prove that any symmetric
one-round protocol with arbitrarily long messages can tolerate at most
O(sqrt(n) * polylog(n)) adaptive corruptions.
At the heart of our results lies a novel use of the Minimax Theorem and a new
technique for converting any one-round secure protocol into a protocol with
messages of polylog(n) bits. This technique may be of independent interest.
Bipartite Perfect Matching in Pseudo-Deterministic NC
We present a pseudo-deterministic NC algorithm for finding perfect matchings in bipartite graphs. Specifically, our algorithm is a randomized parallel algorithm which uses poly(n) processors, poly(log n) depth, poly(log n) random bits, and outputs for each bipartite input graph a unique perfect matching with high probability. That is, on the same graph it returns the same matching for almost all choices of randomness. As an immediate consequence we also find a pseudo-deterministic NC algorithm for constructing a depth first search (DFS) tree. We introduce a method for computing the union of all min-weight perfect matchings of a weighted graph in RNC and a novel set of weight assignments which in combination enable isolating a unique matching in a graph.
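For intuition on the isolation step, the brute-force sketch below (written for this summary; it uses the classical Isolation Lemma with uniformly random weights, not the paper's novel weight assignments, and a tiny complete bipartite graph as a stand-in input) checks how often random edge weights make the minimum-weight perfect matching unique.

```python
# Brute-force illustration of isolation: with edge weights drawn uniformly from
# {1, ..., 2m}, the minimum-weight perfect matching of a bipartite graph is unique
# with probability at least 1/2 (Isolation Lemma).  The paper's contribution is a
# structured weight assignment that makes the isolated matching the same across
# runs; this sketch only shows the classical randomized version.
import itertools
import random

def perfect_matchings(edges, n):
    """Enumerate all perfect matchings of a bipartite graph on parts {0..n-1} x {0..n-1}."""
    for perm in itertools.permutations(range(n)):
        if all((u, perm[u]) in edges for u in range(n)):
            yield tuple((u, perm[u]) for u in range(n))

def min_weight_matching_is_unique(edges, n, rng):
    weight = {e: rng.randint(1, 2 * len(edges)) for e in edges}
    costs = [sum(weight[e] for e in M) for M in perfect_matchings(edges, n)]
    return costs.count(min(costs)) == 1

if __name__ == "__main__":
    n = 4
    edges = {(u, v) for u in range(n) for v in range(n)}   # K_{4,4}: 24 perfect matchings
    rng = random.Random(0)
    trials = 2000
    unique = sum(min_weight_matching_is_unique(edges, n, rng) for _ in range(trials))
    print(f"min-weight matching unique in {unique / trials:.2f} of trials (>= 0.5 expected)")
```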
We then show a way to use pseudo-deterministic algorithms to reduce the number of random bits used by general randomized algorithms. The main idea is that random bits can be reused by successive invocations of pseudo-deterministic randomized algorithms. We use this technique to give an RNC algorithm for constructing a depth first search (DFS) tree using only O(log^2 n) random bits, whereas the previous best randomized algorithm used O(log^7 n), and a new sequential randomized algorithm for the set-maxima problem which uses fewer random bits than the previous state of the art.
Furthermore, we prove that a positive resolution of the decision question NC = RNC would imply an NC algorithm for finding a bipartite perfect matching and an NC algorithm for finding a DFS tree. This is not implied by previous randomized NC search algorithms for finding a bipartite perfect matching, but it is implied by the existence of a pseudo-deterministic NC search algorithm.
The Complexity of Problems in P Given Correlated Instances
Instances of computational problems do not exist in isolation. Rather, multiple and correlated instances of the same problem arise naturally in the real world. The challenge is how to gain computationally from correlations when they can be found.
[DGH, ITCS 2015] showed that significant computational gains can be made by having access to auxiliary instances which are correlated to the primary problem instance via the solution space. They demonstrate this for constraint satisfaction problems, which are NP-hard in the general worst case form.
Here, we set out to study the impact of having access to correlated instances on the complexity of polynomial time problems. Namely, for a problem P that is conjectured to require time n^c for c>0,
we ask whether access to a few instances of P that are correlated in some natural way can be used to solve P on one of them (the designated "primary instance") faster than the conjectured lower bound of n^c.
We focus our attention on a number of problems: the Longest Common Subsequence (LCS), the minimum Edit Distance between sequences, and the Dynamic Time Warping Distance (DTWD) of curves, for all of which the best known algorithms achieve O(n^2/polylog(n)) runtime via dynamic programming. These problems form an interesting case in point, as it has been shown that an O(n^(2 - epsilon)) time algorithm for a worst-case instance would imply improved algorithms
for a host of other problems as well as disprove complexity hypotheses such as the Strong Exponential Time Hypothesis.
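For reference, this is the textbook quadratic dynamic program the abstract refers to (shown here for edit distance; LCS and DTWD share the same two-dimensional structure). It is the standard baseline, not the paper's correlated-instance algorithm.

```python
# Standard O(n*m) edit distance DP, keeping only two rows of the table at a time.
def edit_distance(a: str, b: str) -> int:
    n, m = len(a), len(b)
    prev = list(range(m + 1))                     # distances from a[:0] to every prefix of b
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            cur[j] = min(prev[j] + 1,                              # delete a[i-1]
                         cur[j - 1] + 1,                           # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))     # match / substitute
        prev = cur
    return prev[m]

assert edit_distance("kitten", "sitting") == 3
```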
We show how to use access to a logarithmic number of auxiliary correlated instances to design novel o(n^2) time algorithms for LCS, EDIT, DTWD, and, more generally, improved algorithms for computing any tuple-based similarity measure, a generalization which we define on strings. For the multiple sequence alignment problem on k strings, this yields an O(nk log n) algorithm,
in contrast with the classical O(n^k) dynamic programming.
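One simple way to see how auxiliary alignment information can cut this quadratic cost (an illustrative banded variant written for this summary, not the paper's algorithm): if a correlated instance suggests the optimal alignment stays within a band of width w around the diagonal, only O(n*w) DP cells need to be filled, and the answer is exact whenever the true alignment really lies inside the band.

```python
# Banded edit distance: only DP cells with |i - j| <= w are computed.
# Exact whenever the optimal edit script uses at most w insertions/deletions,
# i.e. whenever the band actually contains an optimal alignment path.
def banded_edit_distance(a: str, b: str, w: int) -> float:
    INF = float("inf")
    n, m = len(a), len(b)
    prev = {j: j for j in range(0, min(m, w) + 1)}          # row i = 0 inside the band
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - w), min(m, i + w) + 1):
            if j == 0:
                cur[j] = i
                continue
            cur[j] = min(prev.get(j, INF) + 1,                           # delete a[i-1]
                         cur.get(j - 1, INF) + 1,                        # insert b[j-1]
                         prev.get(j - 1, INF) + (a[i - 1] != b[j - 1]))  # match / substitute
        prev = cur
    return prev.get(m, INF)

# Inside the band the result agrees with the full quadratic DP.
assert banded_edit_distance("kitten", "sitting", w=3) == 3
```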
Our results hold for several correlation models between the primary and the auxiliary instances. In the most general correlation model we address, we assume that the primary instance is a worst-case instance and the auxiliary instances are chosen with uniform distribution subject to the constraint that their alignments are
epsilon-correlated with the optimal alignment of the primary instance. We emphasize that optimal solutions for the auxiliary instances will not generally coincide with optimal solutions for the worst case primary instance.
We view our work as pointing out a new avenue for seeking significant improvements for sequence alignment problems and
for computing similarity measures, by taking advantage of access to sequences which are correlated through natural generating processes.
In this first work we show how to take advantage of simple, clean, mathematically inspired models of correlation; the intriguing question, looking forward, is to find correlation models that coincide with evolutionary models and other natural relationships, and for which our approach to multiple sequence alignment gives provable guarantees.
Pseudo-Deterministic Streaming
A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, ℓ_2 approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm A, there exists a stream x such that running A twice on x (using different randomness) would, with high probability, result in two different entries as the output.
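A tiny sketch of what failing pseudo-determinism looks like (a toy with direct access to the vector rather than a turnstile stream; the vector and seeds are arbitrary choices for the demo): a natural randomized search samples coordinates until it hits a nonzero one, so different runs typically report different entries.

```python
# Toy (non-streaming) illustration: a randomized "find some nonzero entry"
# procedure whose output depends on its coin flips, hence not pseudo-deterministic.
import random

def find_nonzero(x, seed):
    rng = random.Random(seed)
    while True:
        i = rng.randrange(len(x))
        if x[i] != 0:
            return i                   # whichever nonzero coordinate was hit first

x = [0, 7, 0, 3, 0, 5, 0, 9]           # many valid answers exist
print(find_nonzero(x, seed=1), find_nonzero(x, seed=2))   # runs often disagree
```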
In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for ℓ_2 norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix.
We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a small number of options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, all with a significant probability of being output.
Near-Linear Time Insertion-Deletion Codes and (1+epsilon)-Approximating Edit Distance via Indexing
We introduce fast-decodable indexing schemes for edit distance which can be
used to speed up edit distance computations to near-linear time if one of the
strings is indexed by an indexing string I. In particular, for every length n
and every epsilon > 0, one can in near-linear time construct a string I
with |I| = n, such that indexing
any string S, symbol-by-symbol, with I results in a string S' for which edit
distance computations are easy, i.e., one can compute a
(1 + epsilon)-approximation of the edit distance between S' and any other
string in near-linear time.
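The indexing operation itself is just symbol-by-symbol pairing; the technical content lies in constructing an indexing string I with the right properties. A minimal sketch of the pairing step follows (the random I below is a placeholder assumption, not the paper's construction):

```python
# Pair the i-th symbol of S with the i-th symbol of the indexing string I,
# producing the indexed string S' over the product alphabet.
import random

def index_string(s: str, idx: str):
    assert len(s) == len(idx)
    return list(zip(s, idx))

rng = random.Random(0)
s = "abacabad"
idx = "".join(rng.choice("xyz") for _ in s)   # placeholder indexing string, NOT the real construction
print(index_string(s, idx))                   # e.g. [('a', 'x'), ('b', 'y'), ...]
```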
Our indexing schemes can be used to improve the decoding complexity of
state-of-the-art error correcting codes for insertions and deletions. In
particular, they lead to near-linear time decoding algorithms for the
insertion-deletion codes of [Haeupler, Shahrasbi; STOC `17] and faster decoding
algorithms for list-decodable insertion-deletion codes of [Haeupler, Shahrasbi,
Sudan; ICALP `18]. Interestingly, the latter codes are a crucial ingredient in
the construction of fast-decodable indexing schemes.
08491 Executive Summary -- Theoretical Foundations of Practical Information Security
Designing, building, and operating secure information processing
systems is a complex task, and the only scientific way to address the
diverse challenges arising throughout the life-cycle of security
critical systems is to consolidate and increase the knowledge of the
theoretical foundations of practical security problems. To this end,
the mutual exchange of ideas across individual security research
communities can be extraordinarily beneficial. Accordingly, the
motivation of this Dagstuhl seminar was the integration of different
research areas with the common goal of providing an integral
theoretical basis that is needed for the design of secure information
processing systems.
Pseudo-Deterministic Proofs
We introduce pseudo-deterministic interactive proofs (psdIP): interactive proof systems for search problems where the verifier is guaranteed, with high probability, to produce the same output on different executions. As in the case of classical interactive proofs, the verifier is a probabilistic polynomial time algorithm interacting with an untrusted, powerful prover.
We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial time algorithms: the goal of the latter is to find canonical solutions to search problems whereas the goal of the former is to prove that a solution to a search problem is canonical to a probabilistic polynomial time verifier.
Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one.
The IP = PSPACE characterization implies that psdIP = IP. The challenge is to find constant-round pseudo-deterministic interactive proofs for hard search problems. We show a constant-round pseudo-deterministic interactive proof for the graph isomorphism problem: on any input pair of isomorphic graphs (G_0, G_1), there exists a unique isomorphism phi from G_0 to G_1 (although many isomorphisms may exist) which will be output by the verifier with high probability, regardless of any dishonest prover strategy.
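To make "canonical solution" concrete, here is a brute-force toy (for illustration only; the paper's point is that a polynomial-time verifier, with the help of a prover, can pin down such a canonical isomorphism): among all isomorphisms from G_0 to G_1, single out the lexicographically smallest one, which is unique even when many isomorphisms exist.

```python
# Brute-force canonical choice: the lexicographically first isomorphism between
# two small graphs given as edge sets on vertices 0..n-1.  Exponential time,
# purely to illustrate what a "unique canonical solution" means.
import itertools

def canonical_isomorphism(g0, g1, n):
    target = {frozenset(e) for e in g1}
    for perm in itertools.permutations(range(n)):          # lexicographic order
        if {frozenset((perm[u], perm[v])) for u, v in g0} == target:
            return perm                                     # first = canonical
    return None

# A 4-cycle and a relabeled copy: eight isomorphisms exist, but the
# lexicographically first one is a canonical, repeatable answer.
g0 = {(0, 1), (1, 2), (2, 3), (3, 0)}
g1 = {(0, 2), (2, 1), (1, 3), (3, 0)}
print(canonical_isomorphism(g0, g1, 4))   # (0, 2, 1, 3)
```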
In contrast, we show that it is unlikely that constant-round psdIP proofs exist for NP-complete problems, by showing that if any NP-complete problem has a constant-round psdIP protocol, then the polynomial hierarchy collapses.
08491 Abstracts Collection -- Theoretical Foundations of Practical Information Security
From 30.11. to 05.12.2008, the Dagstuhl Seminar 08491 ``Theoretical Foundations of Practical Information Security'' was held in Schloss Dagstuhl -- Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
- …